Editable Free-Viewpoint Video using a Layered Neural Representation
1 ShanghaiTech University | 2 DGene | 3 Stereye
| Paper | Video | Code |
Original | Duplicate & Affine Transformation
Editing the scene in post-processing. Compared with the original video, the edited scene shows a novel rendering result after we duplicate the performer in the white shirt and apply affine transformations to both performers.
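Conceptually, such an affine edit needs no retraining: the edited layer is rendered by warping each query point back through the inverse of the edit transform before evaluating the layer's radiance field, so the network itself is left untouched. A minimal NumPy sketch (function and parameter names are illustrative, not the released API):

```python
import numpy as np

def edited_query(layer_fn, A, x):
    """Query a neural layer after applying an affine edit.

    layer_fn -- the layer's radiance field, mapping a 3-D point to (sigma, rgb)
    A        -- 4x4 affine matrix placing the layer in the edited scene
    x        -- (3,) query point in world space

    The edit is realised by mapping the query point through A^-1 into
    the layer's canonical space (hypothetical helper, for illustration).
    """
    x_h = np.append(x, 1.0)                  # homogeneous coordinates
    x_canon = (np.linalg.inv(A) @ x_h)[:3]   # undo the edit before querying
    return layer_fn(x_canon)
```

Duplicating a performer then amounts to rendering the same trained layer twice with two different transforms.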
Original
Background + Performer 1 | Background + Performer 2 | Background |
Decomposing the scene during training. We decompose this scene into three layers: performer 1, performer 2, and the background. We show four rendering results from different combinations of our ST-NeRFs. These results show that our layered ST-NeRF representation can predict the occluded parts of each layer at a specific point in space-time.
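Re-assembling the layers follows the usual volume-rendering recipe: samples from every layer along a ray are merged in depth order and alpha-composited, so a nearer layer naturally occludes a farther one. A toy NumPy sketch of this per-ray compositing (not the paper's implementation):

```python
import numpy as np

def render_ray(layer_samples):
    """Merge samples from several neural layers along one ray and
    alpha-composite them in depth order (standard volume rendering).

    layer_samples: list of (t, sigma, rgb) tuples, one per layer, where
      t     -- (N,)   sample depths along the ray
      sigma -- (N,)   predicted densities
      rgb   -- (N, 3) predicted colours
    """
    t = np.concatenate([s[0] for s in layer_samples])
    sigma = np.concatenate([s[1] for s in layer_samples])
    rgb = np.concatenate([s[2] for s in layer_samples], axis=0)

    order = np.argsort(t)                      # sort all layers' samples by depth
    t, sigma, rgb = t[order], sigma[order], rgb[order]

    delta = np.diff(t, append=t[-1] + 1e10)    # spacing between consecutive samples
    alpha = 1.0 - np.exp(-sigma * delta)       # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)
```

Because the sort interleaves samples from all layers, occlusion between performers falls out of the depth ordering rather than any explicit masking.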
Abstract
Generating free-viewpoint videos is critical for immersive VR/AR experiences, but recent neural advances still lack the editing ability to manipulate the visual perception of large dynamic scenes. To fill this gap, in this paper we propose the first approach for editable, photo-realistic free-viewpoint video generation for large-scale dynamic scenes using only 16 sparse cameras. The core of our approach is a new layered neural representation, where each dynamic entity, including the environment itself, is formulated into a space-time coherent neural layered radiance representation called ST-NeRF. Such a layered representation supports full perception and realistic manipulation of the dynamic scene while still supporting a wide-range free viewing experience. In our ST-NeRF, each dynamic entity/layer is represented as a continuous function, which disentangles the location, deformation, and appearance of the dynamic entity in a continuous and self-supervised manner. We propose a scene-parsing 4D label map tracking scheme to disentangle the spatial information explicitly, and a continuous deformation module to disentangle the temporal motion implicitly. An object-aware volume rendering scheme is further introduced to re-assemble all the neural layers. We adopt a novel layered loss and a motion-aware ray sampling strategy to enable efficient training for a large dynamic scene with multiple performers. Our framework further enables a variety of editing functions, i.e., manipulating the scale and location of, duplicating, or retiming individual neural layers to create numerous visual effects while preserving high realism. Extensive experiments demonstrate the effectiveness of our approach in achieving high-quality, photo-realistic, and editable free-viewpoint video generation for dynamic scenes.
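Of the editing operations above, retiming is conceptually the simplest: because each layer is a continuous function of space and time, a performer can be offset, slowed down, or sped up by remapping the layer's time input before evaluation. A hypothetical sketch (names are illustrative, not the paper's API):

```python
def retime_layer(layer_fn, offset, speed=1.0):
    """Retime a neural layer by remapping its time input.

    layer_fn -- a layer's space-time field, called as layer_fn(x, t)
    offset   -- shift applied to the layer's timeline
    speed    -- playback-rate factor (1.0 = unchanged)

    Illustrative helper only -- the released code may differ.
    """
    return lambda x, t: layer_fn(x, speed * t + offset)
```

Since the remapping wraps the field rather than modifying it, different layers in the same scene can be retimed independently while the background plays back unchanged.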
A higher quality video is available at: Bilibili.
Additional Results
Paper
Editable Free-Viewpoint Video using a Layered Neural Representation
Code
We have released the testing part of the code and part of our preprocessed dataset for reproducing our results.
[code]
Acknowledgements
The authors would like to thank Xin Chen and Dali Gao from DGene Digital Technology Co., Ltd. for processing the dataset and figures. We also thank Boyuan Zhang from ShanghaiTech University for producing the supplementary video, and Yingwenqi Jiang, Wenman Hu, and Jiajie Yang for collecting the dataset as performers. Finally, we thank the performers in the ShanghaiTech University New Year Party 2021. This work was supported by NSFC programs (61976138, 61977047), the National Key Research and Development Program (2018YFB2100500), STCSM (2015F0203-000-06), and SHMEC (2019-01-07-00-01-E00003).
Citation
@inproceedings{zhang2021stnerf,
  title     = {Editable Free-Viewpoint Video using a Layered Neural Representation},
  author    = {Zhang, Jiakai and Liu, Xinhang and Ye, Xinyi and Zhao, Fuqiang and Zhang, Yanshun and Wu, Minye and Zhang, Yingliang and Xu, Lan and Yu, Jingyi},
  booktitle = {ACM SIGGRAPH},
  year      = {2021},
}